Add Tor support for outbound connections via SOCKS #778
Conversation
I've assigned @tnull as a reviewer!
tankyleo left a comment:
Thanks for the PR!
```rust
.next()
.ok_or_else(|| {
	log_error!(self.logger, "Failed to resolve network address {}", addr);
let res = if let SocketAddress::OnionV2(old_onion_addr) = addr {
```
nit: did you consider using a match statement here?
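A minimal sketch of what that could look like, with hypothetical `handle_onion_v2` / `handle_other` helpers standing in for the branch bodies not shown in this excerpt:

```rust
// Illustrative only: replace the `if let`/`else` chain with an exhaustive match.
let res = match addr {
	SocketAddress::OnionV2(old_onion_addr) => handle_onion_v2(old_onion_addr),
	other_addr => handle_other(other_addr),
};
```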
```rust
	self.propagate_result_to_subscribers(&node_id, Err(Error::InvalidSocketAddress));
	Error::InvalidSocketAddress
})?;
let connection_future = lightning_net_tokio::tor_connect_outbound(
```
Do we want to allow people to proxy the other address types over Tor as well? We have to be careful about adding more settings, but I'm thinking of an analog to CLN's `always-use-proxy=true` setting.
Could be done in a follow-up if preferred.
Good point, that would be a useful setting...
I'm open to adding that in this PR; it would need another config setting, I suppose.
> Good point, that would be a useful setting...
> I'm open to adding that in this PR; it would need another config setting, I suppose.

Let's just add this as a flag in the above-mentioned `TorConfig` struct?
```rust
let connection_manager = Arc::new(ConnectionManager::new(
	Arc::clone(&peer_manager),
	config.tor_proxy_address.clone(),
	ephemeral_bytes,
```
This will be used to derive passwords sent in plaintext to the SOCKS5 proxy, so out of caution I would have passed a seed here that is different from the seed passed to the `PeerManager` to derive BOLT 8 per-connection ephemeral keys.
The `KeysManager` entropy source is also ChaCha20-based, so it seems to me we can cheaply do another `keys_manager.get_secure_random_bytes()`.
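A sketch of that suggestion, assuming the `ConnectionManager::new` signature from the excerpt above; `get_secure_random_bytes()` is LDK's standard `EntropySource` method:

```rust
// Derive a dedicated 32-byte seed for the SOCKS5 proxy credentials rather than
// reusing the entropy that also feeds BOLT 8 ephemeral key derivation.
let tor_proxy_seed: [u8; 32] = keys_manager.get_secure_random_bytes();
let connection_manager = Arc::new(ConnectionManager::new(
	Arc::clone(&peer_manager),
	config.tor_proxy_address.clone(),
	tor_proxy_seed,
));
```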
```rust
	node_id,
	addr.clone(),
	proxy_addr,
	self.tor_proxy_rng.clone(),
```
nit: I would have used `Arc::clone(&self.tor_proxy_rng)` here, in the same style as the peer manager above, just to make it very clear that we are only creating a new pointer to the same state, not cloning the state itself.
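For illustration, the two forms below are equivalent at runtime; `Arc::clone` just makes the intent explicit:

```rust
// Both only bump the Arc's reference count; the RNG state itself is shared.
let rng = Arc::clone(&self.tor_proxy_rng); // explicit: clones the pointer
let rng = self.tor_proxy_rng.clone(); // same effect, but could be misread as a deep clone
```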
🔔 1st Reminder: Hey @tnull! This PR has been waiting for your review.
@jharveyb Thanks for pointing out these quirks. From my reading, we are still onside, so I would expect everything to work on the current main branch. For the outbound connection case, we could add an override for the remote address. While we don't have native support for inbound Tor connections yet,
This is the patch I have in mind. @tnull, curious about your thoughts.

```diff
diff --git a/lightning-net-tokio/src/lib.rs b/lightning-net-tokio/src/lib.rs
index eec0e424e..104fd8595 100644
--- a/lightning-net-tokio/src/lib.rs
+++ b/lightning-net-tokio/src/lib.rs
@@ -378,12 +378,12 @@ where
 /// futures are freed, though, because all processing futures are spawned with tokio::spawn, you do
 /// not need to poll the provided future in order to make progress.
 pub fn setup_outbound<PM: Deref + 'static + Send + Sync + Clone>(
-	peer_manager: PM, their_node_id: PublicKey, stream: StdTcpStream,
+	peer_manager: PM, their_node_id: PublicKey, stream: StdTcpStream, remote_addr_override: Option<SocketAddress>,
 ) -> impl std::future::Future<Output = ()>
 where
 	PM::Target: APeerManager<Descriptor = SocketDescriptor>,
 {
-	let remote_addr = get_addr_from_stream(&stream);
+	let remote_addr = remote_addr_override.or(get_addr_from_stream(&stream));
 	let (reader, mut write_receiver, read_receiver, us) = Connection::new(stream);
 	#[cfg(test)]
 	let last_us = Arc::clone(&us);
@@ -469,7 +469,7 @@ where
 	if let Ok(Ok(stream)) =
 		time::timeout(Duration::from_secs(CONNECT_OUTBOUND_TIMEOUT), connect_fut).await
 	{
-		Some(setup_outbound(peer_manager, their_node_id, stream))
+		Some(setup_outbound(peer_manager, their_node_id, stream, None))
 	} else {
 		None
 	}
@@ -488,12 +488,12 @@ where
 	PM::Target: APeerManager<Descriptor = SocketDescriptor>,
 {
 	let connect_fut = async {
-		tor_connect(addr, tor_proxy_addr, entropy_source).await.map(|s| s.into_std().unwrap())
+		tor_connect(addr.clone(), tor_proxy_addr, entropy_source).await.map(|s| s.into_std().unwrap())
 	};
 	if let Ok(Ok(stream)) =
 		time::timeout(Duration::from_secs(TOR_CONNECT_OUTBOUND_TIMEOUT), connect_fut).await
 	{
-		Some(setup_outbound(peer_manager, their_node_id, stream))
+		Some(setup_outbound(peer_manager, their_node_id, stream, Some(addr)))
 	} else {
 		None
 	}
@@ -1015,7 +1015,7 @@ mod tests {
 	// 127.0.0.1.
 	let (conn_a, conn_b) = make_tcp_connection();
 
-	let fut_a = super::setup_outbound(Arc::clone(&a_manager), b_pub, conn_a);
+	let fut_a = super::setup_outbound(Arc::clone(&a_manager), b_pub, conn_a, None);
 	let fut_b = super::setup_inbound(b_manager, conn_b);
 
 	tokio::time::timeout(Duration::from_secs(10), a_connected.recv()).await.unwrap();
@@ -1085,7 +1085,7 @@ mod tests {
 	// Call connection setup inside new tokio tasks.
 	let manager_reference = Arc::clone(&a_manager);
 	tokio::spawn(async move { super::setup_inbound(manager_reference, conn_a).await });
-	tokio::spawn(async move { super::setup_outbound(a_manager, b_pub, conn_b).await });
+	tokio::spawn(async move { super::setup_outbound(a_manager, b_pub, conn_b, None).await });
 }
 
 #[tokio::test(flavor = "multi_thread")]
```
tnull left a comment:
Cool, thanks for taking a look!
Already looks pretty good, but here are a few comments after the first pass. Do you think there is a good way to add test coverage for this to CI?
```rust
///
/// [`tor_proxy_address`]: Config::tor_proxy_address
pub fn set_tor_proxy_address(&mut self, tor_proxy_address: core::net::SocketAddr) {
	self.config.tor_proxy_address = Some(tor_proxy_address);
```
Rather than replicating the functionality, please just call through to `inner` as we do elsewhere in this builder. This avoids the two implementations' behavior drifting out of sync over time.
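A rough sketch of that pattern; the `inner` field and its locking are assumptions based on the builder style described here, not the actual code:

```rust
// Delegate to the inner builder instead of duplicating its logic, so the two
// code paths cannot drift apart.
pub fn set_tor_proxy_address(&self, tor_proxy_address: core::net::SocketAddr) {
	self.inner.write().unwrap().set_tor_proxy_address(tor_proxy_address);
}
```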
```rust
/// **Note**: If unset, connecting to peer OnionV3 addresses will fail.
///
/// [`tor_proxy_address`]: Config::tor_proxy_address
pub tor_proxy_address: Option<core::net::SocketAddr>,
```
Hmm, I think it would be preferable to drop this field and just add a separate `Option<TorConfig>` or similar. The `TorConfig` struct would then also allow adding Tor-specific settings, such as the ones mentioned below.
Also note that we'll want to expose all of the new APIs to bindings, so you'll need to make the corresponding changes in `bindings/ldk_node.udl` and use `SocketAddress`, not `SocketAddr`.
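A minimal sketch of how such a struct might look; the field names and the `always_use_proxy` flag (the CLN-style setting discussed above) are illustrative, not an existing API:

```rust
use lightning::ln::msgs::SocketAddress;

/// Illustrative sketch only, not the actual ldk-node API.
pub struct TorConfig {
	/// Address of the local Tor SOCKS5 proxy (e.g., 127.0.0.1:9050).
	pub proxy_address: SocketAddress,
	/// If true, route all outbound connections through the proxy,
	/// analogous to CLN's `always-use-proxy=true`.
	pub always_use_proxy: bool,
}

pub struct Config {
	// ... other fields ...
	/// Tor settings; if `None`, connecting to OnionV3 addresses will fail.
	pub tor_config: Option<TorConfig>,
}
```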
```rust
	self.propagate_result_to_subscribers(&node_id, Err(Error::InvalidSocketAddress));
	Error::InvalidSocketAddress
})?;
let connection_future = lightning_net_tokio::tor_connect_outbound(
```
> Good point, that would be a useful setting...
> I'm open to adding that in this PR; it would need another config setting, I suppose.
>
> Let's just add this as a flag in the above-mentioned `TorConfig` struct?

Hmm, could make sense, but maybe we should only expose the override field on the Tor-specific API?
Addresses #178. Builds on lightningdevkit/rust-lightning#4305.
As mentioned there, I see two drawbacks with this patch as is:
https://github.com/lightningdevkit/rust-lightning/blob/f9ad3450b7d8b722b440f0a5e3d9be8bd7a696ae/lightning-net-tokio/src/lib.rs#L332
This affects `list_peers()` output:

```json
{
	"pubkey": { "pubkey": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f" },
	"socket_addr": { "address": "3.33.236.230:9735" },
	"is_connected": true
},
{
	"pubkey": { "pubkey": "03fe47fdfea0f25fad0013498e8d6cec348ae3d673841ec25ee94f87c21af16ed8" },
	"socket_addr": { "address": "127.0.0.1:9050" },
	"is_connected": true
}
```

But more significantly, AFAICT it affects our setup messages with our peer:
https://github.com/lightningdevkit/rust-lightning/blob/f9ad3450b7d8b722b440f0a5e3d9be8bd7a696ae/lightning/src/ln/peer_handler.rs#L1905
The `filter_addresses()` call here means we won't include the proxy address, but we also won't include that peer's onion address, which we may want to do?

I'm testing this actively now and haven't hit any issues yet, though I haven't had much traffic with that peer either.